Extensive empirical evidence demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting data labels, and the same holds for score-based diffusion models. In this paper, we analyze the phenomenon formally and identify that the key to conditional learning is to partition the data properly. Inspired by the analysis, we propose self-conditioned diffusion models (SCDM), which are trained conditioned on indices produced by running the k-means algorithm on features extracted by a model pre-trained in a self-supervised manner. SCDM significantly improves the unconditional model across various datasets and achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels. SCDM also achieves a slightly better FID than the corresponding conditional model on CIFAR10.
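The pseudo-labeling step described above can be sketched in a few lines: cluster self-supervised features with k-means and use the cluster indices as conditioning labels. The minimal numpy-only k-means below and the stand-in "encoder features" are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: returns a cluster index per feature vector."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center; keep the old one if its cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Hypothetical usage: random vectors stand in for features from a
# self-supervised encoder; the indices would condition the diffusion model.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(size=(100, 16)) + 5,
                      rng.normal(size=(100, 16)) - 5])
pseudo_labels = kmeans(features, k=2)
```

In the actual method these indices would replace ground-truth class labels wherever the conditional diffusion model consumes a label.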
The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19. However, several challenges remain in developing such AI systems. 1) Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints. 2) Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive-field sizes on 3D volumes. 3) The sudden outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multi-scale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different CNN layers. Second, we use this MDA-CNN as the basic network in a novel dual multi-scale mean teacher network (DM${^2}$T-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multi-scale information. Our DM${^2}$T-Net encourages the multiple predictions at different CNN layers from the student and teacher networks to be consistent, yielding a multi-scale consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from the multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
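The semi-supervised objective described above combines a supervised loss with a consistency term between student and teacher predictions at several scales. The numpy sketch below illustrates that structure under simple assumptions (mean-squared consistency, equal weighting of scales); the paper's exact loss and weighting may differ.

```python
import numpy as np

def multiscale_consistency_loss(student_preds, teacher_preds):
    """Mean-squared consistency averaged over predictions produced at
    different network scales (a sketch of the multi-scale consistency idea)."""
    losses = [np.mean((s - t) ** 2) for s, t in zip(student_preds, teacher_preds)]
    return sum(losses) / len(losses)

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher weight update: the teacher is an exponential moving
    average of the student's weights."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

# Toy check with two prediction scales: identical student and teacher
# predictions yield zero consistency loss.
preds = [np.ones((2, 4, 4)), np.ones((2, 8, 8))]
loss = multiscale_consistency_loss(preds, preds)
```

In training, the consistency term would be computed on unlabeled volumes and added to the supervised segmentation loss on labeled ones.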
Electroencephalography (EEG) and language have each been widely explored for many downstream tasks (e.g., sentiment analysis, relation detection). Multimodal approaches that study both domains together remain underexplored, even though multimodal learning has in recent years been shown to be more powerful than its unimodal counterparts. In this study, we aim to explore the relationship and dependency between EEG and language, i.e., how one domain reflects and represents the other. To study the relationship at the representation level, we introduce MTAM, a Multimodal Transformer Alignment Model, to observe coordinated representations between the two modalities, and we employ the transformed representations for downstream applications. We use various relation-alignment-seeking techniques, such as canonical correlation analysis and Wasserstein distance, as loss functions to transform low-level language and EEG features into high-level transformed features. On downstream applications, sentiment analysis and relation detection, we achieve new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieves an F1-score improvement of 16.5% on sentiment analysis for K-EmoCon, 26.6% on sentiment analysis for ZuCo, and 31.1% on relation detection for ZuCo. In addition, we provide interpretations of the performance improvement by: (1) visualizing the original and transformed feature distributions, showing the effectiveness of the alignment module in discovering and encoding the relationship between EEG and language; (2) visualizing word-level and sentence-level EEG-language alignment weights, showing the influence of different language semantics and EEG frequency features; and (3) visualizing brain topographic maps to provide an intuitive demonstration of the connectivity of EEG and language responses in brain regions.
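One of the alignment objectives mentioned above, canonical correlation analysis, can be turned into a loss by maximizing the canonical correlations between the two views. The numpy sketch below is an illustrative CCA-style objective (negative sum of canonical correlations, as in deep-CCA formulations), not the paper's implementation.

```python
import numpy as np

def cca_loss(X, Y, eps=1e-6):
    """Negative sum of canonical correlations between two views
    (a sketch of a CCA-style alignment objective; eps regularizes
    the covariance matrices for numerical stability)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0] - 1
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    corrs = np.linalg.svd(T, compute_uv=False)
    return -corrs.sum()

# Two identical views are perfectly correlated, so with 3 dimensions the
# loss approaches -3.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
loss = cca_loss(X, X.copy())
```

Minimizing such a loss pulls the EEG and language representations toward a maximally correlated shared space.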
There is growing interest in applying deep neural networks to the automated interpretation and analysis of 12-lead electrocardiograms (ECGs). The current paradigm of machine learning methods is often limited by the amount of labeled data. This phenomenon is particularly problematic for clinically relevant data, where labeling at scale can be time-consuming and expensive given the required expertise and human effort. Moreover, deep learning classifiers may be vulnerable to adversarial examples and perturbations, which could have catastrophic consequences when applied in, for example, medical, clinical-trial, or insurance-claim settings. In this paper, we propose a physiologically inspired data augmentation method to improve performance and robustness of heart disease detection from ECG signals. We obtain augmented samples by perturbing the data distribution toward other classes along the geodesic in Wasserstein space. To better utilize domain-specific knowledge, we design a ground metric that recognizes the difference between ECG signals based on physiologically determined features. Learning from 12-lead ECG signals, our model is able to distinguish five cardiac conditions. Our results demonstrate improvements in accuracy and robustness, reflecting the effectiveness of our data augmentation method.
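The geodesic augmentation idea can be illustrated in the simplest setting: for two equal-size 1-D empirical distributions under the Euclidean ground metric, the optimal transport map pairs sorted samples, so points on the Wasserstein geodesic are convex combinations of matched samples. This is only a sketch; the paper uses a physiologically designed ground metric on ECG signals, not plain 1-D Euclidean distance.

```python
import numpy as np

def geodesic_augment(x_src, x_tgt, t):
    """Displacement interpolation between two equal-size 1-D empirical
    distributions: sort both sample sets (the 1-D optimal transport
    matching) and linearly interpolate matched pairs. t=0 recovers the
    source distribution, t=1 the target."""
    src_sorted = np.sort(x_src)
    tgt_sorted = np.sort(x_tgt)
    return (1 - t) * src_sorted + t * tgt_sorted

# Toy usage: augment class-A samples partway toward class B.
src = np.array([0.0, 1.0, 2.0])
tgt = np.array([10.0, 11.0, 12.0])
midpoint = geodesic_augment(src, tgt, 0.5)  # -> [5.0, 6.0, 7.0]
```

Intermediate values of t yield samples that lie between the class distributions, which is the augmentation direction described in the abstract.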
The acquisition and evaluation of pavement surface data play a critical role in pavement condition assessment. In this paper, an efficient end-to-end network for automated pavement crack segmentation, called RHA-Net, is proposed to improve crack segmentation accuracy. RHA-Net is built by integrating residual blocks (ResBlocks) and hybrid attention blocks into an encoder-decoder architecture. The ResBlocks are used to improve the ability of RHA-Net to extract high-level abstract features. The hybrid attention blocks are designed to fuse low-level features with high-level features, helping the model focus on the correct channels and crack regions and thereby improving the feature-representation capability of RHA-Net. An image dataset containing 789 pavement crack images collected by a self-designed mobile robot is constructed and used to train and evaluate the proposed model. Comprehensive ablation studies validate the effectiveness of adding residual blocks and hybrid attention mechanisms, and the proposed model compares favorably with other state-of-the-art networks. Furthermore, a lightweight version of the model, generated by introducing depthwise separable convolutions, achieves better performance and faster processing speed with only 1/30 the parameter count of U-Net. The developed system can segment pavement cracks in real time on the embedded device Jetson TX2 (25 FPS). A video of the real-time experiments is released at https://youtu.be/3xiogk0fig4.
Diffusion probabilistic models (DPMs) are a class of powerful deep generative models (DGMs). Despite their success, the iterative generation process over the full timesteps is much less efficient than that of other DGMs such as GANs. Thus, the generation performance on a few timesteps is crucial, and it is greatly influenced by the covariance design in DPMs. In this work, we consider diagonal and full covariances to improve the expressive power of DPMs. We derive the optimal result for such covariances, and then correct it when the mean of DPMs is imperfect. Both the optimal and the corrected covariances can be decomposed into terms of conditional expectations over functions of the noise. Building upon this, we propose to estimate the optimal covariance and its correction by learning these conditional expectations. Our method can be applied to DPMs with both discrete and continuous timesteps. In our implementation we consider the diagonal covariance for computational efficiency, and for an efficient practical implementation we adopt a parameter-sharing scheme and a two-stage training process. Empirically, our method outperforms a wide variety of covariance designs on likelihood results, and improves sample quality, especially on a small number of timesteps.
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and it highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
Learning effective motion features is an essential pursuit of video representation learning. This paper presents a simple yet effective sample construction strategy to boost the learning of motion features in video contrastive learning. The proposed method, dubbed Motion-focused Quadruple Construction (MoQuad), augments instance discrimination by meticulously disturbing the appearance and motion of both the positive and negative samples to create a quadruple for each video instance, such that the model is encouraged to exploit motion information. Unlike recent approaches that create extra auxiliary tasks for learning motion features or apply explicit temporal modelling, our method keeps the simple and clean contrastive learning paradigm (i.e., SimCLR) without multi-task learning or extra modelling. In addition, we design two extra training strategies by analyzing initial MoQuad experiments. By simply applying MoQuad to SimCLR, extensive experiments show that we achieve superior performance on downstream tasks compared to the state of the art. Notably, on the UCF-101 action recognition task, we achieve 93.7% accuracy after pre-training the model on Kinetics-400 for only 200 epochs, surpassing various previous methods.
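The quadruple construction above can be sketched with toy array operations: disturb appearance while keeping motion for one sample, and destroy motion (here, by shuffling frames) for others. The specific transforms and the composition of the quadruple are illustrative assumptions, not MoQuad's actual recipe.

```python
import numpy as np

def disturb_appearance(clip, rng):
    """Appearance disturbance: per-clip brightness/contrast jitter that
    leaves the frame ordering (i.e., the motion) intact."""
    scale = rng.uniform(0.8, 1.2)
    shift = rng.uniform(-0.1, 0.1)
    return np.clip(clip * scale + shift, 0.0, 1.0)

def disturb_motion(clip, rng):
    """Motion disturbance: shuffle frames so per-frame appearance
    statistics are kept but temporal dynamics are destroyed."""
    order = rng.permutation(clip.shape[0])
    return clip[order]

def build_quadruple(clip, rng):
    """A sketch of a motion-focused quadruple: the anchor, an
    appearance-disturbed positive, and two motion-disturbed negatives."""
    positive = disturb_appearance(clip, rng)
    negative_a = disturb_motion(clip, rng)
    negative_b = disturb_motion(disturb_appearance(clip, rng), rng)
    return clip, positive, negative_a, negative_b

# Toy video clip: T x H x W x C array with values in [0, 1].
rng = np.random.default_rng(0)
clip = rng.uniform(size=(8, 4, 4, 3))
quad = build_quadruple(clip, rng)
```

Contrasting against motion-broken samples of this kind is what pushes the encoder to rely on motion rather than appearance.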
Despite the surprising few-shot performance of in-context learning (ICL), it is still a common practice to randomly sample examples to serve as context. This paper advocates a new principle for ICL: self-adaptive in-context learning. The self-adaptation mechanism is introduced to help each sample find an in-context example permutation (i.e., selection and ordering) that can derive the correct prediction, thus maximizing performance. To validate the effectiveness of self-adaptive ICL, we propose a general select-then-rank framework and instantiate it with new selection and ranking algorithms. Upon extensive evaluation on eight different NLP datasets, our self-adaptive ICL method achieves a 40% relative improvement over the common practice setting. Further analysis reveals the enormous potential of self-adaptive ICL: given more advanced algorithms, it might be able to close the gap between ICL and finetuning. Our code is released to facilitate future research in this area: https://github.com/Shark-NLP/self-adaptive-ICL
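The select-then-rank framework can be sketched in two stages: select candidate examples by similarity to the test input, then rank orderings of the selected examples with a scoring function. The cosine-similarity selection and the dummy scorer below are illustrative stand-ins; the paper's actual selection and ranking criteria (e.g., model-based scores) differ.

```python
import itertools
import numpy as np

def select(test_vec, example_vecs, k):
    """Selection stage: pick the k examples whose embeddings are most
    similar (cosine) to the test input's embedding."""
    norms = np.linalg.norm(example_vecs, axis=1) * np.linalg.norm(test_vec)
    sims = example_vecs @ test_vec / norms
    return np.argsort(-sims)[:k]

def rank(candidate_ids, score_fn):
    """Ranking stage: score every ordering of the selected examples and
    return the best permutation. score_fn is a stand-in for a model-based
    criterion (e.g., some function of the LM's output distribution)."""
    perms = list(itertools.permutations(candidate_ids))
    return max(perms, key=score_fn)

# Toy usage with hypothetical 2-D embeddings and a dummy score that
# simply prefers permutations starting with a smaller example id.
examples = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
chosen = select(np.array([1.0, 0.0]), examples, k=2)
best_order = rank(chosen, score_fn=lambda p: -p[0])
```

Brute-force ranking over all k! orderings is only feasible for small k; the paper's point is that better selection and ranking algorithms make this search practical and worthwhile.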